

A Wikipedia Group Made a Guide to Detect AI Writing. Now a Plug-In Uses It to 'Humanize' Chatbots

WIRED

The web's best resource for spotting AI writing has ironically become a manual for AI models to hide it. On Saturday, tech entrepreneur Siqi Chen released an open-source plug-in for Anthropic's Claude Code AI assistant that instructs the model to stop writing like an AI model. Called Humanizer, the simple prompt plug-in feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have cataloged as chatbot giveaways. Chen published the plug-in on GitHub, where it had picked up more than 1,600 stars as of Monday.
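To get a feel for the kinds of tells such a guide catalogs, here is a minimal sketch in Python that scans text for a few giveaway patterns of the sort Wikipedia editors describe. The patterns below are illustrative assumptions, not the actual 24 rules Humanizer uses, and the real plug-in works in the opposite direction: it is a prompt telling the model to avoid these habits, not a scanner that flags them.

```python
import re

# Illustrative giveaway patterns of the kind the Wikipedia guide catalogs.
# These are assumptions for demonstration, NOT Humanizer's actual 24 rules.
AI_TELLS = {
    "stock transition": r"\b(?:moreover|furthermore|additionally),",
    "hedging boilerplate": r"\bit(?:'s| is) (?:important|worth) (?:to note|noting)\b",
    "overused verb": r"\bdelve(?:s|d)? into\b",
    "summary cliche": r"\bin conclusion\b",
    "em-dash aside": "\u2014",  # chatbots lean heavily on em dashes
    "tidy triad": r"\b\w+, \w+, and \w+\b",  # e.g. "clear, concise, and compelling"
}

def flag_ai_tells(text: str) -> dict:
    """Count how often each giveaway pattern appears in the text."""
    return {name: len(re.findall(pattern, text, flags=re.IGNORECASE))
            for name, pattern in AI_TELLS.items()}

sample = ("Moreover, it is important to note that this piece delves into "
          "a clear, concise, and compelling overview. In conclusion, ...")
for name, hits in flag_ai_tells(sample).items():
    if hits:
        print(f"{name}: {hits}")
```

Inverted into a prompt ("avoid stock transitions, never open with 'Moreover,' ..."), the same checklist becomes a style guide for evading detection, which is exactly the irony the WIRED piece describes.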


AI Text Detectors and the Misclassification of Slightly Polished Arabic Text

Almohaimeed, Saleh, Almohaimeed, Saad, Jari, Mousa, Alobaid, Khaled A., Alotaibi, Fahad

arXiv.org Artificial Intelligence

Many AI detection models have been developed to counter the spread of articles created by artificial intelligence (AI). However, if a human-authored article is slightly polished by AI, the borderline decisions of these detection models shift, leading them to classify it as AI-generated. This misclassification may result in falsely accusing authors of AI plagiarism and harms the credibility of AI detectors. Some efforts have been made to address this challenge in English, but not in Arabic. In this paper, we generated two datasets. The first contains 800 Arabic articles, half AI-generated and half human-authored. We used it to evaluate 14 large language models (LLMs) and commercial AI detectors, assessing their ability to distinguish between human-authored and AI-generated articles. The eight best models were then chosen as detectors for our primary question: whether they would classify slightly polished human-authored text as AI-generated. The second dataset, Ar-APT, contains 400 Arabic human-authored articles polished by 10 LLMs under 4 polishing settings, totaling 16,400 samples. We used it to evaluate the eight nominated models and determine whether slight polishing affects their performance. The results reveal that all AI detectors incorrectly attribute a significant number of articles to AI. The best-performing LLM, Claude-4 Sonnet, achieved 83.51% accuracy, which dropped to 57.63% for articles slightly polished by LLaMA-3. The best-performing commercial detector, Originality.AI, achieved 92% accuracy, which dropped to 12% for articles slightly polished by Mistral or Gemma-3.
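As a rough sketch of the evaluation the abstract describes, the Python below shows the two measurements that matter: accuracy on the clean human-vs-AI split, and accuracy on the same human articles after light LLM polishing. The detector interface and placeholder data are assumptions for illustration; this is not the authors' code.

```python
import random

class StubDetector:
    """Stand-in for a real AI-text detector (hypothetical interface)."""
    def classify(self, text: str) -> str:
        return random.choice(["human", "ai"])  # a real detector would score the text

def accuracy(detector, articles, true_label):
    """Fraction of articles assigned the correct label."""
    predictions = [detector.classify(a) for a in articles]
    return sum(p == true_label for p in predictions) / len(articles)

detector = StubDetector()
human_articles = ["..."] * 400     # human-authored Arabic articles
ai_articles = ["..."] * 400        # AI-generated counterparts
polished_articles = ["..."] * 400  # the same human articles, lightly LLM-polished

# Step 1: rank detectors on the clean human-vs-AI split.
base = (accuracy(detector, human_articles, "human")
        + accuracy(detector, ai_articles, "ai")) / 2

# Step 2: a robust detector should still label polished text "human";
# the paper finds accuracy collapses instead (e.g., 92% down to 12%).
drop = (accuracy(detector, human_articles, "human")
        - accuracy(detector, polished_articles, "human"))
print(f"base accuracy {base:.1%}; accuracy drop after polishing {drop:.1%}")
```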


Autonomous AI imitators increase diversity in homogeneous information ecosystems

Johansen, Emil Bakkensen, Baumann, Oliver

arXiv.org Artificial Intelligence

Recent breakthroughs in large language models (LLMs) have facilitated autonomous AI agents capable of imitating human-generated content. This technological advancement raises fundamental questions about AI's impact on the diversity and democratic value of information ecosystems. We introduce a large-scale simulation framework to examine AI-based imitation within news, a context crucial for public discourse. By systematically testing two distinct imitation strategies across a range of information environments varying in initial diversity, we demonstrate that AI-generated articles do not uniformly homogenize content. Instead, AI's influence is strongly context-dependent: AI-generated content can introduce valuable diversity in originally homogeneous news environments but diminish diversity in initially heterogeneous contexts. These results show that the initial diversity of an information environment critically shapes AI's impact, challenging the assumption that AI-driven imitation uniformly threatens diversity. When information is initially homogeneous, AI-driven imitation can expand perspectives, styles, and topics. This matters especially in news, where information diversity fosters richer public debate by exposing citizens to alternative viewpoints, challenging biases, and preventing narrative monopolies, all of which are essential to a resilient democracy.
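The abstract's core quantity is the diversity of an information environment before and after AI imitators join it. One simple way to operationalize that, sketched below, is the mean pairwise cosine distance between article embeddings; the metric and toy data are assumptions for illustration, not the authors' actual framework.

```python
import numpy as np

def diversity(embeddings: np.ndarray) -> float:
    """Mean pairwise cosine distance; higher = more heterogeneous."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                  # pairwise cosine similarities
    n = len(embeddings)
    off_diag = sims[~np.eye(n, dtype=bool)]   # drop self-similarity
    return float(1.0 - off_diag.mean())

rng = np.random.default_rng(0)
homogeneous = rng.normal(size=(100, 64)) * 0.1 + 1.0  # tightly clustered articles
heterogeneous = rng.normal(size=(100, 64))            # spread-out articles
print(diversity(homogeneous), diversity(heterogeneous))
```

On such a measure, an initially homogeneous pool scores near 0 and a heterogeneous one near 1, which is the axis along which the paper's context-dependent results vary.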


People who frequently use ChatGPT for writing tasks are accurate and robust detectors of AI-generated text

Russell, Jenna, Karpinska, Marzena, Iyyer, Mohit

arXiv.org Artificial Intelligence

In this paper, we study how well humans can detect text generated by commercial LLMs (GPT-4o, Claude, o1). We hire annotators to read 300 non-fiction English articles, label them as either human-written or AI-generated, and provide paragraph-length explanations for their decisions. Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such "expert" annotators misclassifies only 1 of 300 articles, significantly outperforming most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization. Qualitative analysis of the experts' free-form explanations shows that while they rely heavily on specific lexical clues ('AI vocabulary'), they also pick up on more complex phenomena within the text (e.g., formality, originality, clarity) that are challenging to assess for automatic detectors. We release our annotated dataset and code to spur future research into both human and automated detection of AI-generated text.
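The headline result rests on a simple aggregation: take five expert annotators' labels per article and keep the majority. A trivial sketch of that step (the labels are hypothetical):

```python
from collections import Counter

def majority_vote(labels):
    """Return the label chosen by the most annotators."""
    return Counter(labels).most_common(1)[0][0]

# Five hypothetical expert annotators judging one article:
votes = ["ai", "ai", "human", "ai", "ai"]
print(majority_vote(votes))  # -> "ai"
```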


How Beloved Indie Blog 'The Hairpin' Turned Into an AI Clickbait Farm

WIRED

Almost every day, a publication announces layoffs or shuts down. Sports Illustrated just let almost all of its staff go after weathering an embarrassing scandal over AI-generated articles. It's unclear what the desiccated magazine's future holds, but the sad fate of another formerly great outlet offers a preview of what may await fallen media properties. In 2018, the indie women's website The Hairpin stopped publishing, along with its sister site The Awl. This year, The Hairpin has been Frankensteined back into existence and stuffed with slapdash AI-generated articles designed to attract search engine traffic.


AI-generated content can sometimes slip into your Google News feed

Engadget

Correction, January 18, 2024, 4:55 PM ET: This story originally claimed that AI-generated content was being promoted in Google News. We did not note that surfacing such stories required heavily manipulating Google News search results, so much so that the search didn't surface an original, more legitimate source. As 404 Media itself writes, "Both of these rip-off articles appear in Google News search results. The first appears when searching for 'Star Wars theory' and setting the results to the past 24 hours. The second appears when searching for the subject of the article with a similar 24 hour setting."


AI Generates Articles with Potentially Risky YMYL Content

#artificialintelligence

Artificial intelligence (AI) is becoming increasingly popular in the world of content creation. AI-generated articles are now being used to create serious, Your Money or Your Life (YMYL) content for websites and other digital platforms. The use of AI-generated articles has grown steadily over the past few years as more businesses recognize their potential to produce high-quality, engaging content quickly and efficiently. AI can generate both short-form and long-form pieces covering a wide range of topics, from finance and health care to travel and lifestyle. AI-generated YMYL content is particularly relevant for businesses looking to provide accurate information on important topics such as financial, medical, or legal advice, or any other topic where accuracy is essential.


CNET Secretly Used AI on Articles That Didn't Disclose That Fact, Staff Say

#artificialintelligence

A CNET spokesperson didn't respond to a request for comment, but in a story published today, The Verge corroborated our source's allegations in several ways. Last week, we reported that the prominent tech news site CNET had been quietly posting dozens of articles written by an AI system. After a public outcry, we discovered that in spite of the Red Ventures-owned publication's promise that all the AI-generated articles were being diligently fact-checked by a human editor, the AI was making many extremely basic errors. CNET responded by issuing an extensive correction and slapping a warning label on the rest of the content -- as well, oddly, as adding a disclaimer to many human-written articles about AI topics. If all of CNET's AI-generated articles had been marked as such, you could probably write the whole thing off as a dismal, miserly attempt to eliminate the jobs of entry-level writers.


CNET Is Quietly Publishing Entire Articles Generated By AI

#artificialintelligence

Next time you're on your favorite news site, you might want to double check the byline to see if it was written by an actual human. CNET, a massively popular tech news outlet, has been quietly employing the help of "automation technology" -- a stylistic euphemism for AI -- on a new wave of financial explainer articles, seemingly starting around November of last year. In the absence of any formal announcement or coverage, it appears that this was first spotted by online marketer Gael Breton in a tweet on Wednesday. The articles are published under the unassuming appellation of "CNET Money Staff," and encompass topics like "Should You Break an Early CD for a Better Rate?" or "What is Zelle and How Does It Work?" That byline obviously does not paint the full picture, and so your average reader visiting the site likely would have no idea that what they're reading is AI-generated.
